-
Proposes a DRL-based pedagogical policy to decide when to present or skip training problems in a logic tutor. Four conditions are compared: control, adaptive DRL, random skipping, and DRL with worked-example choice. The DRL policy reduces training time while maintaining posttest performance.
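The present-or-skip decision loop described above can be sketched with a tabular stand-in for the deep RL policy (a minimal sketch, assuming an illustrative mastery-level state, reward model, and action names that are not taken from the study):

```python
# Hypothetical sketch of a policy that decides whether to PRESENT or SKIP
# the next training problem. A full deep-RL implementation is out of scope;
# tabular Q-learning illustrates the decision loop. The state (a coarse
# mastery level 0..3) and the reward model are illustrative assumptions.
import random

ACTIONS = ("present", "skip")

def train_policy(episodes=500, alpha=0.1, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {(m, a): 0.0 for m in range(4) for a in ACTIONS}
    for _ in range(episodes):
        m = rng.randrange(4)  # start episode at a random mastery level
        for _ in range(10):
            # Epsilon-greedy action selection (20% exploration).
            if rng.random() < 0.2:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(m, x)])
            # Assumed reward model: presenting helps low-mastery students;
            # skipping saves time for high-mastery students.
            if a == "present":
                reward = 1.0 if m < 2 else -0.2
                m2 = min(3, m + 1)
            else:
                reward = 0.5 if m >= 2 else -1.0
                m2 = m
            best_next = max(q[(m2, x)] for x in ACTIONS)
            q[(m, a)] += alpha * (reward + gamma * best_next - q[(m, a)])
            m = m2
    return q

q = train_policy()
policy = {m: max(ACTIONS, key=lambda a: q[(m, a)]) for m in range(4)}
print(policy)
```

Under these assumed rewards, the learned policy presents problems to low-mastery students and skips for high-mastery students, mirroring the time-saving behavior the abstract reports.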
-
Learning to derive subgoals reduces the gap between experts and students and prepares students for future problem solving. This paper explores a training strategy that uses backward worked examples (BWE) and backward problem solving (BPS) within an intelligent logic tutor to support backward strategy learning, with analysis of student experience, performance, and proof construction. Results show that students trained with both BWE and BPS outperform those receiving neither or only BWE, demonstrating more efficient subgoal derivation.
-
Humans adopt various problem-solving strategies depending on their mastery level, problem type, and complexity. Many of these problem-solving strategies have been integrated within intelligent problem-solvers to solve structured and complex problems efficiently. One such strategy is means-ends analysis, which involves comparing the goal and the givens of a problem and iteratively setting up subgoal(s) at each step until the subgoal(s) are straightforward to derive from the givens. However, little is known about the impact of explicitly teaching novices such a strategy for structured problem-solving with tutors. In this study, we teach novices a subgoal-directed problem-solving strategy inspired by means-ends analysis using a problem-based training intervention within an intelligent logic-proof tutor. Analyzing students' performance and problem-solving approaches after training, we observed that the students who learned the strategy used it more when solving new problems, constructed optimal logic proofs, and outperformed those who did not learn the strategy.
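The subgoal-directed idea can be illustrated in a few lines: compare the goal with the givens and recursively set up subgoals until each one follows directly from the givens. This is a minimal sketch, assuming a toy rule set rather than the tutor's actual inference rules:

```python
# Toy means-ends-style backward chaining: each rule is (premises, conclusion).
# The rules and proposition names are illustrative assumptions.
RULES = [
    (("A", "B"), "C"),
    (("C",), "D"),
    (("D", "E"), "F"),
]

def subgoals_for(goal, givens, depth=5):
    """Return an ordered proof plan of subgoals, or None if no plan is found."""
    if goal in givens:
        return []          # goal is directly available; no subgoal needed
    if depth == 0:
        return None        # bail out on deep searches
    for premises, conclusion in RULES:
        if conclusion == goal:
            plan = []
            for p in premises:            # each premise becomes a subgoal
                sub = subgoals_for(p, givens, depth - 1)
                if sub is None:
                    break                 # this rule cannot be satisfied
                plan += sub
            else:
                return plan + [goal]
    return None

print(subgoals_for("F", {"A", "B", "E"}))  # plan: derive C, then D, then F
```

With givens A, B, and E, the sketch sets up C and D as intermediate subgoals before reaching F, which is the behavior means-ends analysis is meant to scaffold.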
-
Problem decomposition into sub-problems or subgoals, and recomposition of the solutions to those subgoals into one complete solution, is a common strategy for reducing difficulty in structured problem solving. In this study, we use a data-driven, graph-mining-based method to decompose historical student solutions of logic-proof problems into chunks. We design a new problem type in which we present these chunks in a Parsons Problem fashion and ask students to reconstruct the complete solution from the chunks. We incorporated these problems within an intelligent logic tutor and called them Chunky Parsons Problems (CPP). These problems demonstrate the process of problem decomposition to students and require them to pay attention to the decomposed solution while they reconstruct the complete solution. The aim of introducing CPP was to improve students' problem-solving skills and performance by improving their decomposition-recomposition skills without significantly increasing training difficulty. Our analysis showed that CPPs can be as easy as Worked Examples (WE), and students who received CPPs with simple explanations attached to the chunks scored marginally higher than those who received CPPs without explanations or did not receive them at all. Also, the normalized learning gain of these students shifted more toward the positive side than that of other students. Finally, examining their proof-construction traces on posttest problems, we observed that they formed identifiable chunks, aligned with those found in historical solutions, with higher efficiency.
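The data-driven decomposition step can be approximated with a simple frequency heuristic over historical solution traces. The paper uses a graph-mining method; this sketch, with invented rule names and traces, only illustrates the general idea of extracting recurring chunks:

```python
# Crude stand-in for chunk mining: count contiguous pairs of rule
# applications across historical traces and keep the frequent ones.
# The traces and rule names below are illustrative assumptions.
from collections import Counter

traces = [
    ["MP", "MP", "SIMP", "ADD"],
    ["MP", "MP", "SIMP", "DS"],
    ["SIMP", "MP", "MP", "ADD"],
]

def frequent_chunks(traces, min_support=2):
    """Return rule-application pairs that appear in at least min_support places."""
    pairs = Counter()
    for t in traces:
        for a, b in zip(t, t[1:]):   # all adjacent pairs in the trace
            pairs[(a, b)] += 1
    return {p for p, n in pairs.items() if n >= min_support}

chunks = frequent_chunks(traces)
print(sorted(chunks))
```

Frequent pairs such as two consecutive Modus Ponens applications surface as candidate chunks, which could then be presented as draggable units in a Parsons-style reconstruction task.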
-
The assistance dilemma is a well-recognized challenge: determining when and how to provide help during problem solving in intelligent tutoring systems. This dilemma is particularly challenging to address in domains such as logic proofs, where problems can be solved in a variety of ways. In this study, we investigate two data-driven techniques to address the when and how of the assistance dilemma, combining a model that predicts when students need help learning efficient strategies with hints that suggest what subgoal to achieve. We conduct a study assessing the impact of the new pedagogical policy against a control policy without these adaptive components. We found empirical evidence suggesting that showing subgoals in training problems based on the model's predictions helped the students who needed it most and improved test performance compared to their control peers. Our key findings include significantly fewer steps in posttest problem solutions for students with low prior proficiency and significantly reduced help avoidance for all students in training.
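The "when" half of this setup can be sketched as a thresholded predictor over coarse training features. This is a minimal sketch under stated assumptions: the features, weights, bias, and threshold are all invented for illustration and are not the study's trained model:

```python
# Hypothetical predictor for when a student needs strategy help.
# All feature names, weights, and the 0.5 threshold are assumptions.
import math

def needs_help(avg_step_time, error_rate, hint_avoidance,
               weights=(0.8, 1.5, 1.2), bias=-2.0, threshold=0.5):
    """Logistic model over coarse training features; True triggers a subgoal hint."""
    z = bias + sum(w * x for w, x in
                   zip(weights, (avg_step_time, error_rate, hint_avoidance)))
    return 1 / (1 + math.exp(-z)) >= threshold

# A struggling student triggers a subgoal hint; a fluent one does not.
print(needs_help(avg_step_time=2.0, error_rate=0.6, hint_avoidance=0.9))
print(needs_help(avg_step_time=0.5, error_rate=0.1, hint_avoidance=0.0))
```

When the predictor fires, the tutor would surface a subgoal hint (the "what"); otherwise it stays silent, avoiding unnecessary assistance.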
-
Within intelligent tutoring systems, hint policies are needed to determine when and how to give hints and what type of hint is most beneficial. In this study, we focus on discovering whether certain hint types influence problem solving behavior. We investigate the influence of two hint types (next-step hints and more abstract high-level hints) on students' behavior in a college-level logic proof tutor, Deep Thought. The results suggest that hint types can affect student behavior, including hint usage, rule applications, and time in-tutor.
-
The effectiveness of Intelligent Tutoring Systems (ITSs) often depends upon their pedagogical strategies, the policies used to decide what action to take next in the face of alternatives. We induce policies based on two general Reinforcement Learning (RL) frameworks, POMDP and MDP, given the limited feature space. We conduct an empirical study where the RL-induced policies are compared against a random yet reasonable policy. Results show that when the contents are controlled to be equal, the MDP-based policy can improve students' learning significantly more than the random baseline, while the POMDP-based policy cannot outperform the latter. A possible reason is that the features selected for the MDP framework may not be the optimal feature space for the POMDP.
-
An important goal in the design and development of Intelligent Tutoring Systems (ITSs) is to have a system that adaptively reacts to students' behavior in the short term and effectively improves their learning performance in the long term. Inducing effective pedagogical strategies that accomplish this goal is an essential challenge. To address this challenge, we explore three aspects of a Markov Decision Process (MDP) framework through four experiments. The three aspects are: 1) reward function, detecting the impact of immediate and delayed reward on the effectiveness of the policies; 2) state representation, exploring ECR-based, correlation-based, and ensemble feature selection approaches for representing the MDP state space; and 3) policy execution, investigating the effectiveness of stochastic and deterministic policy executions on learning. The most important result of this work is that there exists an aptitude-treatment interaction (ATI) effect in our experiments: the policies have significantly different impacts on particular types of students as opposed to the entire population. We refer to the students who are sensitive to the policies as the Responsive group. All of the following results are based on the Responsive group. First, we find that an immediate reward can facilitate a more effective induced policy than a delayed reward. Second, the MDP policies induced based on low correlation-based and ensemble feature selection approaches are more effective than a random yet reasonable policy. Third, no significant improvement was found using stochastic policy execution, due to a ceiling effect.
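Policy induction in an MDP framework like the one above can be sketched with value iteration over a toy pedagogical MDP. This is a minimal sketch under stated assumptions: the states, actions ("elicit" vs. "tell"), transition probabilities, and rewards are illustrative, whereas the study induces its models from student data:

```python
# Toy pedagogical MDP: states are coarse learning states, actions are
# tutor decisions. P[s][a] lists (next_state, probability); R[s][a] is
# the immediate reward. All numbers here are illustrative assumptions.
STATES = ("low", "mid", "high")
ACTIONS = ("elicit", "tell")
P = {
    "low":  {"elicit": [("mid", 0.6), ("low", 0.4)],
             "tell":   [("mid", 0.5), ("low", 0.5)]},
    "mid":  {"elicit": [("high", 0.5), ("mid", 0.5)],
             "tell":   [("high", 0.2), ("mid", 0.8)]},
    "high": {"elicit": [("high", 1.0)], "tell": [("high", 1.0)]},
}
R = {
    "low":  {"elicit": 0.0, "tell": 0.3},
    "mid":  {"elicit": 0.0, "tell": 0.1},
    "high": {"elicit": 1.0, "tell": 1.0},
}

def value_iteration(gamma=0.9, iters=200):
    """Induce a deterministic policy by iterating the Bellman optimality update."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                    for a in ACTIONS)
             for s in STATES}
    # Greedy policy with respect to the converged value function.
    return {s: max(ACTIONS, key=lambda a: R[s][a] + gamma *
                   sum(p * V[s2] for s2, p in P[s][a]))
            for s in STATES}

print(value_iteration())
```

Under these assumed numbers, the immediate reward attached to "tell" makes it the induced choice in the low state, while "elicit" wins in the mid state, loosely echoing the abstract's point that the reward function shapes which policy is induced.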